Results 1 - 20 of 36
1.
Hear Res ; 437: 108856, 2023 09 15.
Article in English | MEDLINE | ID: mdl-37531847

ABSTRACT

Although the involvement of superior temporal vs. inferior frontal and parietal networks in recognition of speech in a background of competing speech is well established, their relative contributions remain unclear. Here, we use fMRI with spectrotemporal modulation transfer function (ST-MTF) modeling to examine the speech information represented in temporal vs. frontoparietal networks for two speech recognition tasks with and without a competing talker. Specifically, 31 listeners completed two versions of a three-alternative forced choice competing speech task: "Unison" and "Competing", in which a female (target) and a male (competing) talker uttered identical or different phrases, respectively. Spectrotemporal modulation filtering (i.e., acoustic distortion) was applied to the two-talker mixtures and ST-MTF models were generated to predict brain activation from differences in spectrotemporal-modulation distortion on each trial. Three cortical networks were identified based on differential patterns of ST-MTF predictions and the resultant ST-MTF weights across conditions (Unison, Competing): a bilateral superior temporal (S-T) network, a frontoparietal (F-P) network, and a network distributed across cortical midline regions and the angular gyrus (M-AG). The S-T network and the M-AG network responded primarily to spectrotemporal cues associated with speech intelligibility, regardless of condition, but the S-T network responded to a greater range of temporal modulations, suggesting a more acoustically driven response. The F-P network responded to the absence of intelligibility-related cues in both conditions, but also to the absence (presence) of target-talker (competing-talker) vocal pitch in the Competing condition, suggesting a generalized response to signal degradation. Task performance was best predicted by activation in the S-T and F-P networks, but in opposite directions (S-T: more activation = better performance; F-P: vice versa).
Moreover, S-T network predictions were entirely ST-MTF mediated while F-P network predictions were ST-MTF mediated only in the Unison condition, suggesting an influence from non-acoustic sources (e.g., informational masking) in the Competing condition. Activation in the M-AG network was weakly positively correlated with performance and this relation was entirely superseded by those in the S-T and F-P networks. Regarding contributions to speech recognition, we conclude: (a) superior temporal regions play a bottom-up, perceptual role that is not qualitatively dependent on the presence of competing speech; (b) frontoparietal regions play a top-down role that is modulated by competing speech and scales with listening effort; and (c) performance ultimately relies on dynamic interactions between these networks, with ancillary contributions from networks not involved in speech processing per se (e.g., the M-AG network).
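The core ST-MTF modeling step (predicting trial-wise activation from the spectrotemporal modulation content that survives the filtering, then reading the fitted weights as a modulation transfer function) can be sketched in miniature. Everything below is illustrative: the trial count, the rate-by-density grid, and the simulated "true" weight map are invented, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: each trial retains some spectrotemporal modulation
# energy, summarized on a coarse temporal-rate by spectral-density grid.
n_trials, n_rates, n_densities = 200, 6, 4
X = rng.random((n_trials, n_rates * n_densities))  # per-trial modulation energy

# Invented "ground truth": a region whose activation tracks slow temporal
# rates at low spectral densities (the upper-left corner of the grid).
true_w = np.zeros((n_rates, n_densities))
true_w[:2, :2] = 1.0
y = X @ true_w.ravel() + 0.1 * rng.standard_normal(n_trials)  # trial-wise BOLD

# Ridge regression of activation on modulation energy recovers an
# ST-MTF-like weight map for the simulated region.
lam = 1.0
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
w_hat = w_hat.reshape(n_rates, n_densities)
```

The fitted map concentrates weight where the simulated region was sensitive, which is the sense in which differential ST-MTF weight patterns can dissociate cortical networks.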


Subjects
Speech Perception, Speech, Male, Humans, Female, Speech Perception/physiology, Cognition, Cues, Acoustics, Speech Intelligibility, Perceptual Masking/physiology
2.
J Cogn Neurosci ; 34(11): 2189-2214, 2022 10 01.
Article in English | MEDLINE | ID: mdl-36007073

ABSTRACT

It has long been known that listening to speech activates inferior frontal (pre-)motor regions in addition to a more dorsal premotor site (dPM). Recent work shows that dPM, located adjacent to laryngeal motor cortex, responds to low-level acoustic speech cues including vocal pitch, and the speech envelope, in addition to higher-level cues such as phoneme categories. An emerging hypothesis is that dPM is part of a general auditory-guided laryngeal control circuit that plays a role in producing speech and other voluntary auditory-vocal behaviors. We recently reported a study in which dPM responded to vocal pitch during a degraded speech recognition task, but only when speech was rated as unintelligible; dPM was more robustly modulated by the categorical difference between intelligible and unintelligible speech. Contrary to the general auditory-vocal hypothesis, this suggests intelligible speech is the primary driver of dPM. However, the same pattern of results was observed in pitch-sensitive auditory cortex. Crucially, vocal pitch was not relevant to the intelligibility judgment task, which may have facilitated processing of phonetic information at the expense of vocal pitch cues. The present fMRI study (n = 25) tests the hypothesis that, for a multitalker task that emphasizes pitch for talker segregation, left dPM and pitch-sensitive auditory regions will respond to vocal pitch regardless of overall speech intelligibility. This would suggest that pitch processing is indeed a primary concern of this circuit, apparent during perception only when the task demands it. Spectrotemporal modulation distortion was used to independently modulate vocal pitch and phonetic content in two-talker (male/female) utterances across two conditions (Competing, Unison), only one of which required pitch-based segregation (Competing). 
A Bayesian hierarchical drift-diffusion model was used to predict speech recognition performance from patterns of spectrotemporal distortion imposed on each trial. The model's drift rate parameter, a d'-like measure of performance, was strongly associated with vocal pitch for Competing but not Unison. Using a second Bayesian hierarchical model, we identified regions where behaviorally relevant acoustic features were related to fMRI activation in dPM. We regressed the hierarchical drift-diffusion model's posterior predictions of trial-wise drift rate, reflecting the relative presence or absence of behaviorally relevant acoustic features from trial to trial, against trial-wise activation amplitude. A significant positive association with overall drift rate, reflecting vocal pitch and phonetic cues related to overall intelligibility, was observed in left dPM and bilateral auditory cortex in both conditions. A significant positive association with "pitch-restricted" drift rate, reflecting only the relative presence or absence of behaviorally relevant pitch cues, regardless of the presence or absence of phonetic content (intelligibility), was observed in left dPM, but only in the Competing condition. Interestingly, the same effect was observed in bilateral auditory cortex, but in both conditions. A post hoc mediation analysis ruled out the possibility that decision load was responsible for the observed pitch effects. These findings suggest that processing of vocal pitch is a primary concern of the auditory-cortex-dPM circuit, although during perception, core pitch processing is carried out by auditory cortex with a potential modulatory influence from dPM.
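The drift-diffusion component can be illustrated with a minimal simulation of the standard two-boundary accumulator. The parameters below are arbitrary illustrations, not the fitted Bayesian posteriors from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(drift, boundary=1.0, dt=0.005, sigma=1.0, n_trials=500):
    """Two-boundary drift-diffusion: evidence starts at 0 and accumulates
    until it hits +boundary (correct) or -boundary (error)."""
    n_correct = 0
    for _ in range(n_trials):
        x = 0.0
        while abs(x) < boundary:
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        n_correct += x >= boundary
    return n_correct / n_trials

# A higher drift rate (stronger behaviorally relevant cues on a trial)
# yields higher accuracy, which is why drift rate serves as a d'-like
# performance measure in the hierarchical model described above.
acc_low = simulate_ddm(drift=0.5)
acc_high = simulate_ddm(drift=2.0)
```

In the full hierarchical model, trial-wise drift rates (rather than raw percent correct) are what get regressed against activation amplitude.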


Subjects
Auditory Cortex, Motor Cortex, Speech Perception, Acoustic Stimulation/methods, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Bayes Theorem, Female, Humans, Male, Pitch Perception/physiology, Speech/physiology, Speech Perception/physiology
3.
Neurotoxicology ; 84: 73-83, 2021 05.
Article in English | MEDLINE | ID: mdl-33667563

ABSTRACT

It is well-established that aminoglycoside antibiotics are ototoxic, and the toxicity can be drastically enhanced by the addition of loop diuretics, resulting in rapid irreversible hair cell damage. Using both electrophysiologic and morphological approaches, we investigated whether this combined treatment affected the cochlea at the region of ribbon synapses, consequently resulting in auditory synaptopathy. A series of varied gentamicin and furosemide doses was applied to C57BL/6 mice, and auditory brainstem responses (ABR) and distortion product otoacoustic emissions (DPOAE) were measured to assess ototoxic damage within the cochlea. In brief, the treatment effectively induced cochlear damage and promoted some reorganization of synaptic ribbons, while a reduction of ribbon density only occurred after a substantial loss of outer hair cells. In addition, both the ABR wave I amplitude and the ribbon density were elevated in low-dose treatment conditions, but a correlation between the two events was not significant for individual cochleae. In sum, combined gentamicin and furosemide treatment, at titrated doses below those that produce hair cell damage, typically triggers synaptic plasticity rather than a permanent synaptic loss.


Subjects
Anti-Bacterial Agents/administration & dosage, Cochlea/drug effects, Furosemide/administration & dosage, Gentamicins/administration & dosage, Neuronal Plasticity/drug effects, Synapses/drug effects, Animals, Anti-Bacterial Agents/toxicity, Cochlea/pathology, Cochlea/physiology, Dose-Response Relationship, Drug, Drug Combinations, Female, Furosemide/toxicity, Gentamicins/toxicity, Male, Mice, Mice, Inbred C57BL, Neuronal Plasticity/physiology, Synapses/pathology, Synapses/physiology
4.
J Speech Lang Hear Res ; 63(7): 2141-2161, 2020 07 20.
Article in English | MEDLINE | ID: mdl-32603618

ABSTRACT

Purpose: Age-related declines in auditory temporal processing and cognition make older listeners vulnerable to interference from competing speech. This vulnerability may be increased in older listeners with sensorineural hearing loss due to additional effects of spectral distortion and accelerated cognitive decline. The goal of this study was to uncover differences between older hearing-impaired (OHI) listeners and older normal-hearing (ONH) listeners in the perceptual encoding of competing speech signals. Method: Age-matched groups of 10 OHI and 10 ONH listeners performed the coordinate response measure task with a synthetic female target talker and a male competing talker at a target-to-masker ratio of +3 dB. Individualized gain was provided to OHI listeners. Each listener completed 50 baseline and 800 "bubbles" trials in which randomly selected segments of the speech modulation power spectrum (MPS) were retained on each trial while the remainder was filtered out. Average performance was fixed at 50% correct by adapting the number of segments retained. Multinomial regression was used to estimate weights showing the regions of the MPS associated with performance (a "classification image" or CImg). Results: The CImg weights were significantly different between the groups in two MPS regions: a region encoding the shared phonetic content of the two talkers and a region encoding the competing (male) talker's voice. The OHI listeners demonstrated poorer encoding of the phonetic content and increased vulnerability to interference from the competing talker. Individual differences in CImg weights explained over 75% of the variance in baseline performance in the OHI listeners, whereas differences in high-frequency pure-tone thresholds explained only 10%.
Conclusion: Suprathreshold deficits in the encoding of low- to mid-frequency (~5-10 Hz) temporal modulations, which may reflect poorer "dip listening," and in auditory grouping at a perceptual and/or cognitive level are responsible for the relatively poor performance of OHI versus ONH listeners on a different-gender competing speech task. Supplemental Material https://doi.org/10.23641/asha.12568472.
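The "bubbles" analysis can be sketched under simplifying assumptions: binary retained/filtered MPS segments, simulated responses, and plain logistic rather than multinomial regression. The segment counts and the choice of which segments are informative are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative "bubbles" data: on each trial a random subset of modulation
# power spectrum (MPS) segments is retained (1) or filtered out (0).
n_trials, n_segments = 2000, 30
X = rng.integers(0, 2, size=(n_trials, n_segments)).astype(float)

# Invented ground truth: only the first 5 segments carry the cues that
# drive correct responses.
logit = X[:, :5].sum(axis=1) - 2.0
y = (rng.random(n_trials) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic regression by gradient descent; the fitted per-segment weights
# act as a simple stand-in for the classification image (CImg).
w, b = np.zeros(n_segments), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / n_trials
    b -= 0.1 * (p - y).mean()
```

Segments whose retention predicts correct responses receive large weights; group differences in such weight maps are what distinguish the OHI and ONH listeners above.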


Subjects
Hearing Loss, Sensorineural, Hearing Loss, Speech Perception, Aged, Auditory Threshold, Female, Hearing, Hearing Tests, Humans, Male, Perceptual Masking
5.
J Acoust Soc Am ; 147(1): 273, 2020 01.
Article in English | MEDLINE | ID: mdl-32006979

ABSTRACT

Masked sentence perception by hearing-aid users is strongly correlated with three variables: (1) the ability to hear phonetic details as estimated by the identification of syllable constituents in quiet or in noise; (2) the ability to use situational context that is extrinsic to the speech signal; and (3) the ability to use inherent context provided by the speech signal itself. This approach is called "the syllable-constituent, contextual theory of speech perception" and is supported by the performance of 57 hearing-aid users in the identification of 109 syllable constituents presented in a background of 12-talker babble and the identification of words in naturally spoken sentences presented in the same babble. A simple mathematical model, inspired in large part by Boothroyd and Nittrouer [(1988). J. Acoust. Soc. Am. 84, 101-114] and Fletcher [Allen (1996) J. Acoust. Soc. Am. 99, 1825-1834], predicts sentence perception from listeners' abilities to recognize isolated syllable constituents and to benefit from context. When the identification accuracy of syllable constituents is greater than about 55%, individual differences in context utilization play a minor role in determining the sentence scores. As syllable-constituent scores fall below 55%, individual differences in context utilization play an increasingly greater role in determining sentence scores. Implications for hearing-aid design goals and fitting procedures are discussed.
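The context mechanism in the Boothroyd-and-Nittrouer-style model cited above can be illustrated with the classic k-factor relation: context raises recognition probability from p (no context) to 1 - (1 - p)^k, where k > 1 indexes how much context the listener exploits. The value k = 2.0 below is illustrative, not a parameter fitted in the study.

```python
# Minimal sketch of the k-factor context relation (assumed form; k = 2.0
# is an arbitrary illustration).

def with_context(p_isolated, k=2.0):
    """Predicted recognition probability when context effectively
    multiplies the number of independent recognition channels by k."""
    return 1 - (1 - p_isolated) ** k

# Context helps relatively little when isolated recognition is already
# high, and proportionally more as isolated scores fall, consistent with
# the ~55% crossover described in the abstract.
gains = {p: with_context(p) - p for p in (0.9, 0.55, 0.3)}
```

For example, with_context(0.55) = 0.7975, a gain of about 25 percentage points, whereas with_context(0.9) = 0.99 gains only 9 points.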


Subjects
Noise, Persons With Hearing Impairments/psychology, Phonetics, Speech Perception, Acoustic Stimulation, Aged, Aged, 80 and over, Female, Hearing Aids, Humans, Male, Middle Aged, Perceptual Masking, Recognition, Psychology
6.
J Acoust Soc Am ; 143(1): 378, 2018 01.
Article in English | MEDLINE | ID: mdl-29390743

ABSTRACT

Individuals with hearing loss are thought to be less sensitive to the often subtle variations of acoustic information that support auditory stream segregation. Perceptual segregation can be influenced by differences in both the spectral and temporal characteristics of interleaved stimuli. The purpose of this study was to determine what stimulus characteristics support sequential stream segregation by normal-hearing and hearing-impaired listeners. Iterated rippled noises (IRNs) were used to assess the effects of tonality, spectral resolvability, and hearing loss on the perception of auditory streams in two pitch regions, corresponding to 250 and 1000 Hz. Overall, listeners with hearing loss were significantly less likely to segregate alternating IRNs into two auditory streams than were normal-hearing listeners. Low-pitched IRNs were generally less likely to segregate into two streams than were higher-pitched IRNs. High-pass filtering was a strong contributor to reduced segregation for both groups. The tonality, or pitch strength, of the IRNs had a significant effect on streaming, but the effect was similar for both groups of subjects. These data demonstrate that stream segregation is influenced by many factors including pitch differences, pitch region, spectral resolution, and degree of stimulus tonality, in addition to the loss of auditory sensitivity.
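Iterated rippled noise has a standard delay-and-add construction, sketched below: each iteration adds a delayed copy of the signal to itself, building up a pitch at 1/delay Hz whose salience (tonality) grows with the number of iterations. Durations, sample rate, and iteration count here are illustrative, not the study's stimulus parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_irn(delay_s, n_iterations, dur_s=0.5, fs=16000, gain=1.0):
    """Iterated rippled noise via repeated delay-and-add of white noise."""
    d = int(round(delay_s * fs))                # delay in samples
    sig = rng.standard_normal(int(dur_s * fs))
    for _ in range(n_iterations):
        delayed = np.concatenate([np.zeros(d), sig[:-d]])
        sig = sig + gain * delayed              # add delayed copy
    return sig / np.max(np.abs(sig))            # normalize peak amplitude

# A 4-ms delay targets the 250-Hz pitch region used in the study;
# a 1-ms delay would target 1000 Hz. More iterations -> stronger tonality.
irn_250 = make_irn(delay_s=0.004, n_iterations=8)
```

The periodicity shows up as a strong autocorrelation peak at the delay, which is the acoustic basis of the pitch-strength manipulation.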


Subjects
Auditory Perception, Cues, Hearing Loss/psychology, Persons With Hearing Impairments/psychology, Acoustic Stimulation, Adult, Aged, Auditory Threshold, Case-Control Studies, Female, Hearing, Hearing Loss/diagnosis, Hearing Loss/physiopathology, Humans, Male, Middle Aged, Pitch Perception, Time Factors
7.
Ear Hear ; 39(3): 583-593, 2018.
Article in English | MEDLINE | ID: mdl-29135685

ABSTRACT

OBJECTIVES: The medial olivocochlear (MOC) efferent system can modify cochlear function to improve sound detection in noise, but its role in speech perception in noise is unclear. The purpose of this study was to determine the association between MOC efferent activity and performance on two speech-in-noise tasks at two signal-to-noise ratios (SNRs). It was hypothesized that efferent activity would be more strongly correlated with performance at the more challenging SNR, relative to performance at the less challenging SNR. DESIGN: Sixteen adults aged 35 to 73 years participated. Subjects had pure-tone averages ≤25 dB HL and normal middle ear function. High-frequency pure-tone averages were computed across 3000 to 8000 Hz and ranged from 6.3 to 48.8 dB HL. Efferent activity was assessed using contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) measured in right ears, and MOC activation was achieved by presenting broadband noise to left ears. Contralateral suppression was expressed as the decibel change in TEOAE magnitude obtained with versus without the presence of the broadband noise. TEOAE responses were also examined for middle ear muscle reflex activation and synchronous spontaneous otoacoustic emissions (SSOAEs). Speech-in-noise perception was assessed using the closed-set coordinate response measure word recognition task and the open-set Institute of Electrical and Electronics Engineers sentence task. Speech and noise were presented to right ears at two SNRs. Performance on each task was scored as percent correct. Associations between contralateral suppression and speech-in-noise performance were quantified using partial rank correlational analyses, controlling for the variables age and high-frequency pure-tone average. RESULTS: One subject was excluded due to probable middle ear muscle reflex activation. Subjects showed a wide range of contralateral suppression values, consistent with previous reports. 
Three subjects with SSOAEs had contralateral suppression results similar to those of subjects without SSOAEs. The magnitude of contralateral suppression was not significantly correlated with speech-in-noise performance on either task at a single SNR (p > 0.05), contrary to the hypothesis. However, contralateral suppression was significantly correlated with the slope of the psychometric function, computed as the difference between performance levels at the two SNRs divided by 3 (the decibel difference between the two SNRs), for the coordinate response measure task (partial rs = 0.59; p = 0.04) and for the Institute of Electrical and Electronics Engineers task (partial rs = 0.60; p = 0.03). CONCLUSIONS: In a group of primarily older adults with normal hearing or mild hearing loss, olivocochlear efferent activity assessed using contralateral suppression of TEOAEs was not associated with speech-in-noise performance at a single SNR. However, auditory efferent activity appears to be associated with the slope of the psychometric function for both a word and a sentence recognition task in noise. Results suggest that individuals with stronger MOC efferent activity tend to be more responsive to changes in SNR, such that small increases in SNR result in better speech-in-noise performance relative to individuals with weaker MOC efferent activity. Additionally, the results suggest that the slope of the psychometric function may be a more useful metric than performance at a single SNR when examining the relationship between speech recognition in noise and MOC efferent activity.
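The slope metric defined in the abstract is simple to state in code: the difference between percent-correct scores at the two SNRs divided by the 3-dB step between them. The scores below are made-up illustrations, not data from the study.

```python
# Psychometric slope as defined above: percentage points gained per dB
# of SNR improvement (illustrative scores, not study data).

def psychometric_slope(pc_easy_snr, pc_hard_snr, snr_step_db=3.0):
    return (pc_easy_snr - pc_hard_snr) / snr_step_db

# A listener whose score rises from 55% to 85% over the 3-dB step has a
# steeper slope (10 points/dB) than one rising from 58% to 70% (4 points/dB),
# the pattern associated with stronger MOC efferent activity above.
strong = psychometric_slope(85.0, 55.0)
weak = psychometric_slope(70.0, 58.0)
```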


Subjects
Cochlea/physiology, Hearing/physiology, Noise, Olivary Nucleus/physiology, Speech Perception/physiology, Adult, Aged, Evoked Potentials, Auditory, Female, Hearing Loss/physiopathology, Humans, Male, Middle Aged, Otoacoustic Emissions, Spontaneous, Perceptual Masking, Psychometrics
8.
J Acoust Soc Am ; 141(4): 2933, 2017 04.
Article in English | MEDLINE | ID: mdl-28464618

ABSTRACT

The abilities of 59 adult hearing-aid users to hear phonetic details were assessed by measuring their abilities to identify syllable constituents in quiet and in differing levels of noise (12-talker babble) while wearing their aids. The set of sounds consisted of 109 frequently occurring syllable constituents (45 onsets, 28 nuclei, and 36 codas) spoken in varied phonetic contexts by eight talkers. In nominal quiet, a speech-to-noise ratio (SNR) of 40 dB, scores of individual listeners ranged from about 23% to 85% correct. Averaged over the range of SNRs commonly encountered in noisy situations, scores of individual listeners ranged from about 10% to 71% correct. The scores in quiet and in noise were very strongly correlated, R = 0.96. This high correlation implies that common factors play primary roles in the perception of phonetic details in quiet and in noise. Put differently, hearing-aid users' problems perceiving phonetic details in noise appear to be tied to their problems perceiving phonetic details in quiet, and vice versa.


Subjects
Correction of Hearing Impairment/instrumentation, Hearing Aids, Hearing Loss, Sensorineural/rehabilitation, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Adult, Aged, Aged, 80 and over, Audiometry, Pure-Tone, Audiometry, Speech, Auditory Threshold, Electric Stimulation, Female, Hearing, Hearing Loss, Sensorineural/diagnosis, Hearing Loss, Sensorineural/physiopathology, Hearing Loss, Sensorineural/psychology, Humans, Male, Middle Aged, Persons With Hearing Impairments/psychology, Phonetics, Speech Intelligibility
10.
J Acoust Soc Am ; 140(3): 2027, 2016 09.
Article in English | MEDLINE | ID: mdl-27914370

ABSTRACT

Contralateral suppression of otoacoustic emissions (OAEs) is frequently used to assess the medial olivocochlear (MOC) efferent system, and may have clinical utility. However, OAEs are weak or absent in hearing-impaired ears, so little is known about MOC function in the presence of hearing loss. A potential alternative measure is contralateral suppression of the auditory steady-state response (ASSR) because ASSRs are measurable in many hearing-impaired ears. This study compared contralateral suppression of both transient-evoked otoacoustic emissions (TEOAEs) and ASSRs in a group of ten primarily older adults with either normal hearing or mild sensorineural hearing loss. Responses were elicited using 75-dB peak sound pressure level clicks. The MOC was activated using contralateral broadband noise at 60 dB sound pressure level. Measurements were made concurrently to ensure a consistent attentional state between the two measures. The magnitude of contralateral suppression of ASSRs was significantly larger than contralateral suppression of TEOAEs. Both measures usually exhibited high test-retest reliability within a session. However, there was no significant correlation between the magnitude of contralateral suppression of TEOAEs and of ASSRs. Further work is needed to understand the role of the MOC in contralateral suppression of ASSRs.
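Contralateral suppression in these studies is expressed as the decibel change in response magnitude with vs. without contralateral broadband noise (per the TEOAE definition given in the entry above). A minimal sketch of that computation, with illustrative amplitudes rather than measured data:

```python
import math

# Contralateral suppression in dB from linear response magnitudes;
# positive values mean the contralateral noise reduced the response.

def suppression_db(mag_without_noise, mag_with_noise):
    return 20 * math.log10(mag_without_noise / mag_with_noise)

# A response whose amplitude drops from 1.00 to 0.89 (linear units)
# under contralateral noise is suppressed by roughly 1 dB.
teoae_suppression = suppression_db(1.00, 0.89)
```

Measuring TEOAE and ASSR suppression concurrently, as in the study, holds attentional state constant while the two dB values are compared.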


Subjects
Otoacoustic Emissions, Spontaneous, Acoustic Stimulation, Adult, Aged, Cochlea, Deafness, Female, Humans, Male, Middle Aged, Reproducibility of Results
11.
Integr Zool ; 10(1): 29-37, 2015 Jan.
Article in English | MEDLINE | ID: mdl-24919543

ABSTRACT

In this paper we describe the masking of pure tones in humans and birds by manmade noises and show that similar ideas can be applied when considering the potential effects of noise on fishes, as well as other aquatic vertebrates. Results from many studies on humans and birds, both in the field and in the laboratory, show that published critical ratios can be used to predict the masked thresholds for pure tones when maskers consist of complex manmade and natural noises. We argue from these data that a single, simple measure, the species critical ratio, can be used to estimate the effect of manmade environmental noises on the perception of communication and other biologically relevant sounds. We also reason that if this principle holds for species as diverse as humans and birds, it probably also applies to all other vertebrates, including fishes.
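The critical-ratio prediction described above has a one-line form: a tone's masked threshold (dB SPL) is the masker's spectrum level (dB SPL per Hz) at the tone frequency plus the species' critical ratio at that frequency. The numbers below are illustrative, not measured values for any species.

```python
# Critical-ratio prediction of a masked pure-tone threshold
# (illustrative values; real critical ratios vary with frequency/species).

def masked_threshold_db(noise_spectrum_level_db, critical_ratio_db):
    return noise_spectrum_level_db + critical_ratio_db

# E.g., noise at a 40 dB SPL/Hz spectrum level and a critical ratio of
# 25 dB predict a masked tone threshold of 65 dB SPL.
predicted = masked_threshold_db(40.0, 25.0)
```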


Subjects
Fishes/physiology, Noise, Perceptual Masking, Animals, Birds/physiology, Humans, Vocalization, Animal
12.
Semin Hear ; 36(4): 273-83, 2015 Nov.
Article in English | MEDLINE | ID: mdl-27587914

ABSTRACT

Following an overview of theoretical issues in speech-perception training and of previous efforts to enhance hearing aid use through training, a multisite study, designed to evaluate the efficacy of two types of computerized speech-perception training for adults who use hearing aids, is described. One training method focuses on the identification of 109 syllable constituents (45 onsets, 28 nuclei, and 36 codas) in quiet and in noise, and on the perception of words in sentences presented in various levels of noise. In a second type of training, participants listen to 6- to 7-minute narratives in noise and are asked several questions about each narrative. Two groups of listeners are trained in a laboratory setting, each using one of these types of training. The training for both groups is preceded and followed by a series of speech-perception tests. Subjects listen in a sound field while wearing their hearing aids at their usual settings. The training continues over 15 to 20 visits, with subjects completing at least 30 hours of focused training with one of the two methods. The two types of training are described in detail, together with a summary of other perceptual and cognitive measures obtained from all participants.

13.
J Acoust Soc Am ; 136(1): 301-16, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24993215

ABSTRACT

Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4-32 Hz), spectral ripple density [0.5-4 cycles/octave (c/o)] and carrier center frequency (500-4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4-12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements.
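The STM stimuli in this and the following entry are built from a "moving ripple" that modulates level jointly across time and log-frequency. The envelope can be sketched as below; parameter names, the channel count, and the envelope sampling rate are illustrative, not the studies' exact synthesis code.

```python
import numpy as np

def stm_envelope_db(rate_hz, density_c_per_oct, depth_db,
                    dur_s=1.0, fs=100, n_channels=64):
    """dB gain of a moving spectrotemporal ripple, sampled on a grid of
    frequency channels (one octave, log-spaced) by time."""
    t = np.arange(0, dur_s, 1 / fs)            # time axis (s)
    x = np.linspace(0, 1, n_channels)          # position in octaves
    phase = 2 * np.pi * (rate_hz * t[None, :] + density_c_per_oct * x[:, None])
    return depth_db * np.sin(phase)            # channels x time, in dB

# A 4-Hz, 2-cycles/octave ripple at 20 dB modulation depth, one of the
# low-rate conditions where HI sensitivity was poorest.
env = stm_envelope_db(rate_hz=4, density_c_per_oct=2, depth_db=20)
```

Applying such an envelope to an octave-band noise carrier, and adaptively shrinking `depth_db`, yields the modulation-depth detection thresholds the abstract reports.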


Subjects
Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/psychology, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adult, Audiometry, Auditory Threshold, Comprehension, Cues, Female, Humans, Male, Middle Aged, Psychoacoustics, Sound Spectrography, Time Factors, Young Adult
15.
J Am Acad Audiol ; 24(4): 274-92, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23636209

ABSTRACT

BACKGROUND: It is widely believed that suprathreshold distortions in auditory processing contribute to the speech recognition deficits experienced by hearing-impaired (HI) listeners in noise. Damage to outer hair cells and attendant reductions in peripheral compression and frequency selectivity may contribute to these deficits. In addition, reduced access to temporal fine structure (TFS) information in the speech waveform may play a role. PURPOSE: To examine how measures of peripheral compression, frequency selectivity, and TFS sensitivity relate to speech recognition performance by HI listeners. To determine whether distortions in processing reflected by these psychoacoustic measures are more closely associated with speech deficits in steady-state or modulated noise. RESEARCH DESIGN: Normal-hearing (NH) and HI listeners were tested on tasks examining frequency selectivity (notched-noise task), peripheral compression (temporal masking curve task), and sensitivity to TFS information (frequency modulation [FM] detection task) in the presence of random amplitude modulation. Performance was tested at 500, 1000, 2000, and 4000 Hz at several presentation levels. The same listeners were tested on sentence recognition in steady-state and modulated noise at several signal-to-noise ratios. STUDY SAMPLE: Ten NH and 18 HI listeners were tested. NH listeners ranged in age from 36 to 80 yr (M = 57.6). For HI listeners, ages ranged from 58 to 87 yr (M = 71.8). RESULTS: Scores on the FM detection task at 1 and 2 kHz were significantly correlated with speech scores in both noise conditions. Frequency selectivity and compression measures were not as clearly associated with speech performance. Speech Intelligibility Index (SII) analyses indicated only small differences in speech audibility across subjects for each signal-to-noise ratio (SNR) condition that would predict differences in speech scores no greater than 10% at a given SNR. 
Actual speech scores varied by as much as 80% across subjects. CONCLUSIONS: The results suggest that distorted processing of audible speech cues was a primary factor accounting for differences in speech scores across subjects and that reduced ability to use TFS cues may be an important component of this distortion. The influence of TFS cues on speech scores was comparable in steady-state and modulated noise. Speech recognition was not related to audibility, represented by the SII, once high-frequency sensitivity differences across subjects (beginning at 5 kHz) were removed statistically. This might indicate that high-frequency hearing loss is associated with distortions in processing in lower-frequency regions.


Subjects
Auditory Threshold/physiology, Hearing Loss, Sensorineural/physiopathology, Hearing/physiology, Persons With Hearing Impairments, Speech Perception/physiology, Adult, Aged, Aged, 80 and over, Audiometry, Female, Hearing Loss, Sensorineural/diagnosis, Humans, Male, Middle Aged, Noise, Perceptual Masking/physiology, Psychoacoustics, Speech Discrimination Tests
16.
J Am Acad Audiol ; 24(4): 293-306, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23636210

ABSTRACT

BACKGROUND: A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. PURPOSE: The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. RESEARCH DESIGN: The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. STUDY SAMPLE: Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. DATA COLLECTION AND ANALYSIS: STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. 
A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. RESULTS: STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. CONCLUSIONS: Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners.
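The STM detection task described in this abstract can be illustrated with a short sketch: a "ripple" gain pattern that modulates a noise carrier's spectral envelope sinusoidally in both log-frequency (spectral density in cycles/octave) and time (temporal rate in Hz), with a given peak-to-trough depth. This is a minimal illustration of the stimulus class, not the study's actual stimulus code; all function and parameter names are illustrative.

```python
import numpy as np

def stm_envelope_db(freqs_hz, t_s, rate_hz=4.0, density_cpo=2.0, depth_db=20.0):
    """Spectrotemporal 'ripple' gain (dB) at each (frequency, time) point:
    a sinusoid drifting across log-frequency at `rate_hz`, with
    `density_cpo` cycles per octave and peak-to-trough depth `depth_db`."""
    octaves = np.log2(freqs_hz / freqs_hz[0])           # position in octaves
    phase = 2 * np.pi * (density_cpo * octaves[:, None]
                         + rate_hz * t_s[None, :])
    return (depth_db / 2) * np.sin(phase)               # gain in dB re carrier

# A four-octave band (here 354-5656 Hz) sampled at 64 frequencies, 100 times
freqs = np.logspace(np.log2(354), np.log2(5656), 64, base=2)
times = np.linspace(0, 0.5, 100)
env = stm_envelope_db(freqs, times, rate_hz=4, density_cpo=2, depth_db=20)
```

Applying `env` (converted to linear gain) to the short-time spectrum of a noise carrier yields a stimulus of the kind the listeners detected; lowering `depth_db` toward threshold corresponds to the adaptive modulation-depth measurement described above.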


Subjects
Hearing Loss, Sensorineural/diagnosis , Hearing Loss, Sensorineural/physiopathology , Hearing/physiology , Sound Spectrography , Speech Discrimination Tests , Adult , Aged , Aged, 80 and over , Audiometry , Auditory Threshold/physiology , Female , Hearing Loss, Bilateral/diagnosis , Hearing Loss, Bilateral/physiopathology , Humans , Male , Middle Aged , Models, Biological , Predictive Value of Tests , Sensitivity and Specificity
17.
J Assoc Res Otolaryngol ; 14(1): 125-37, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23007720

ABSTRACT

Vowel identification is largely dependent on listeners' access to the frequency of two or three peaks in the amplitude spectrum. Earlier work has demonstrated that, whereas normal-hearing listeners can identify harmonic complexes with vowel-like spectral shapes even with very little amplitude contrast between "formant" components and remaining harmonic components, listeners with hearing loss require greater amplitude differences. This is likely the result of the poor frequency resolution that often accompanies hearing loss. Here, we describe an additional acoustic dimension for emphasizing formant versus non-formant harmonics that may supplement amplitude contrast information. The purpose of this study was to determine whether listeners were able to identify "vowel-like" sounds using temporal (component phase) contrast, which may be less affected by cochlear loss than spectral cues, and whether overall identification improves when congruent temporal and spectral information are provided together. Five normal-hearing and five hearing-impaired listeners identified three vowels over many presentations. Harmonics representing formant peaks were varied in amplitude, phase, or a combination of both. In addition to requiring less amplitude contrast, normal-hearing listeners could accurately identify the sounds with less phase contrast than required by people with hearing loss. However, both normal-hearing and hearing-impaired groups demonstrated the ability to identify vowel-like sounds based solely on component phase shifts, with no amplitude contrast information, and they also showed improved performance when congruent phase and amplitude cues were combined. For nearly all listeners, the combination of spectral and temporal information improved identification in comparison to either dimension alone.
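The stimulus manipulation this abstract describes — emphasizing "formant" harmonics of a complex either by boosting their amplitude (a spectral cue) or by shifting their starting phase (a temporal cue) — can be sketched as follows. This is a schematic reconstruction under stated assumptions, not the study's stimulus code; the harmonic numbers, contrast values, and function names are illustrative.

```python
import numpy as np

def harmonic_complex(f0=100.0, n_harm=30, formant_harmonics=(3, 4, 5),
                     amp_contrast_db=0.0, phase_shift=0.0,
                     fs=16000, dur=0.2):
    """Sum of cosine harmonics of `f0`. Harmonics listed in
    `formant_harmonics` are boosted by `amp_contrast_db` and/or have their
    starting phase shifted by `phase_shift` radians, mimicking the
    spectral and temporal 'formant' cues described in the abstract."""
    t = np.arange(int(fs * dur)) / fs
    boost = 10 ** (amp_contrast_db / 20)
    x = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        amp = boost if k in formant_harmonics else 1.0
        phi = phase_shift if k in formant_harmonics else 0.0
        x += amp * np.cos(2 * np.pi * k * f0 * t + phi)
    return x / np.max(np.abs(x))        # normalize to +/-1

amp_cue = harmonic_complex(amp_contrast_db=6.0)        # spectral cue only
phase_cue = harmonic_complex(phase_shift=np.pi / 2)    # temporal cue only
combined = harmonic_complex(amp_contrast_db=6.0, phase_shift=np.pi / 2)
```

The `phase_cue` stimulus has a flat amplitude spectrum — only the temporal fine structure marks the "formant" components — which is the condition both listener groups could still identify above.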


Subjects
Hearing Loss/physiopathology , Phonetics , Speech Perception/physiology , Acoustic Stimulation , Adult , Auditory Threshold/physiology , Humans , Middle Aged , Speech Discrimination Tests/methods
19.
Ear Hear ; 33(2): 231-8, 2012.
Article in English | MEDLINE | ID: mdl-22367094

ABSTRACT

OBJECTIVE: To investigate the contributions of energetic and informational masking to neural encoding and perception in noise, using oddball discrimination and sentence recognition tasks. DESIGN: P3 auditory evoked potential, behavioral discrimination, and sentence recognition data were recorded in response to speech and tonal signals presented to nine normal-hearing adults. Stimuli were presented at a signal to noise ratio of -3 dB in four background conditions: quiet, continuous noise, intermittent noise, and four-talker babble. RESULTS: Responses to tonal signals were not significantly different for the three maskers. However, responses to speech signals in the four-talker babble resulted in longer P3 latencies, smaller P3 amplitudes, poorer discrimination accuracy, and longer reaction times than in any of the other conditions. Results also demonstrate significant correlations between physiological and behavioral data. As latency of the P3 increased, reaction times also increased and sentence recognition scores decreased. CONCLUSION: The data confirm a differential effect of masker type on the P3 and behavioral responses and present evidence of interference by an informational masker to speech understanding at the level of the cortex. Results also validate the use of the P3 as a useful measure to demonstrate physiological correlates of informational masking.
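The -3 dB signal-to-noise ratio used above means each masker carried roughly twice the power of the signal. Mixing a signal and masker at a fixed SNR reduces to scaling the masker to the required power, sketched below (variable names are illustrative, not from the study):

```python
import numpy as np

def mix_at_snr(signal, masker, snr_db):
    """Scale `masker` so the signal-to-masker power ratio equals `snr_db`,
    then return the mixture and the scaled masker (equal-length arrays)."""
    p_sig = np.mean(signal ** 2)
    p_msk = np.mean(masker ** 2)
    target_p_msk = p_sig / 10 ** (snr_db / 10)   # masker power for this SNR
    masker_scaled = masker * np.sqrt(target_p_msk / p_msk)
    return signal + masker_scaled, masker_scaled

rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 1000 * np.arange(8000) / 16000)   # 1 kHz tone
noise = rng.standard_normal(8000)                          # noise masker
mix, scaled = mix_at_snr(sig, noise, snr_db=-3.0)
snr_check = 10 * np.log10(np.mean(sig ** 2) / np.mean(scaled ** 2))
```

The same scaling applies whether the masker is continuous noise, intermittent noise, or multi-talker babble; only the masker waveform changes across the four background conditions.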


Subjects
Evoked Potentials, Auditory , Perceptual Masking/physiology , Phonetics , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Discrimination, Psychological/physiology , Event-Related Potentials, P300/physiology , Female , Humans , Male , Noise , Pattern Recognition, Physiological/physiology , Psychomotor Performance , Reaction Time/physiology , Signal-To-Noise Ratio , Young Adult
20.
J Rehabil Res Dev ; 49(7): 1005-25, 2012.
Article in English | MEDLINE | ID: mdl-23341276

ABSTRACT

Thirty-six blast-exposed patients and twenty-nine non-blast-exposed control subjects were tested on a battery of behavioral and electrophysiological tests that have been shown to be sensitive to central auditory processing deficits. Abnormal performance among the blast-exposed patients was assessed with reference to normative values established as the mean performance on each test by the control subjects plus or minus two standard deviations. Blast-exposed patients performed abnormally at rates significantly above that which would occur by chance on three of the behavioral tests of central auditory processing: the Gaps-In-Noise, Masking Level Difference, and Staggered Spondaic Words tests. The proportion of blast-exposed patients performing abnormally on a speech-in-noise test (Quick Speech-In-Noise) was also significantly above that expected by chance. These results suggest that, for some patients, blast exposure may lead to difficulties with hearing in complex auditory environments, even when peripheral hearing sensitivity is near normal limits.
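The normative criterion described above — flagging a patient's score as abnormal when it falls outside the control-group mean plus or minus two standard deviations — is simple to compute. A minimal sketch (the scores below are made-up illustration data, not the study's data):

```python
import numpy as np

def abnormal_flags(control_scores, patient_scores):
    """Flag patient scores falling outside the control mean +/- 2 SD,
    the normative criterion described in the abstract."""
    mu = np.mean(control_scores)
    sd = np.std(control_scores, ddof=1)          # sample SD of controls
    lo, hi = mu - 2 * sd, mu + 2 * sd
    p = np.asarray(patient_scores)
    return (p < lo) | (p > hi)

controls = [50, 52, 48, 51, 49, 50, 53, 47]      # mean 50, SD 2 -> range 46-54
patients = [50, 62, 41, 49]
flags = abnormal_flags(controls, patients)       # [False, True, True, False]
```

With such flags in hand per test, the study's question becomes whether the proportion of flagged patients on a given test exceeds the roughly 5% expected by chance under the two-SD criterion.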


Subjects
Audiometry/methods , Auditory Perception/physiology , Blast Injuries/physiopathology , Hearing Loss/diagnosis , Veterans/statistics & numerical data , Adult , Blast Injuries/complications , Case-Control Studies , Evoked Potentials, Auditory , Female , Hearing Loss/etiology , Hearing Tests/methods , Humans , Male , Middle Aged , Speech Perception/physiology , Task Performance and Analysis